    Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences

    It is generally accepted that, in the cognitive sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that the implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphic mapping. The mechanistic relation, however, is that of part/whole; the explanans in a mechanistic explanation comprises components of the explanandum phenomenon. Moreover, each component at one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent properties that implement them. How, then, do the computational and implementational properties integrate to create the mechanistic hierarchy? After explicating the general problem (section 2), we further demonstrate it through a concrete example of reinforcement learning in cognitive neuroscience (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanism sketches. On the other solution, there are two separate hierarchies, one computational and the other implementational, related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how either solution fits with the view that computational explanations are full-fledged mechanistic explanations. Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).
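
    To make the contrast concrete, the sketch below (a minimal illustration assumed for exposition, not code from the paper) shows what a homomorphism-style implementation relation looks like: hypothetical physical voltage states are grouped under the states of a small abstract automaton, and the grouping counts as an implementation when the physical dynamics mirrors the abstract transition function.

```python
# A minimal, illustrative sketch (not from the paper) of the homomorphism-style
# implementation relation: physical states are grouped under abstract automaton
# states so that the physical dynamics mirrors the abstract transition function.

# Abstract automaton: two states, one binary input (a simple toggle machine).
ABSTRACT_DELTA = {
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S1", ("S1", 1): "S0",
}

# Hypothetical physical dynamics over voltage levels (the medium-dependent detail).
PHYSICAL_DELTA = {
    ("low", 0): "low",   ("low", 1): "high",
    ("high", 0): "high", ("high", 1): "low",
}

# The implementation mapping: each physical state is assigned an abstract state.
H = {"low": "S0", "high": "S1"}

def is_homomorphism(h, phys_delta, abs_delta, inputs=(0, 1)):
    """Check structure preservation: h(phys_delta(p, i)) == abs_delta(h(p), i)."""
    return all(
        h[phys_delta[(p, i)]] == abs_delta[(h[p], i)]
        for p in h for i in inputs
    )

print(is_homomorphism(H, PHYSICAL_DELTA, ABSTRACT_DELTA))  # True
```

    The mechanistic part/whole relation, by contrast, would decompose the explanandum into its components rather than map abstract states onto a physical medium, which is the mismatch of forms the abstract describes.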

    On Two Different Kinds of Computational Indeterminacy

    It is often indeterminate what function a given computational system computes. This phenomenon has been referred to as “computational indeterminacy” or “multiplicity of computations”. In this paper, we argue that what has typically been treated as a single challenge of computational indeterminacy in fact subsumes two distinct phenomena, which tend to be bundled together and should be teased apart. One kind of indeterminacy concerns the functional (or formal) characterization of the system’s relevant behavior (briefly: how its physical states are grouped together and mapped onto abstract states). The other kind concerns the manner in which the abstract (or computational) states are interpreted (briefly: what function the system computes). We discuss the similarities and differences between the two kinds of computational indeterminacy, their implications for certain accounts of “computational individuation” in the literature, and their relevance to different levels of description within the computational system. We also examine the interrelationships between our proposed accounts of the two kinds of indeterminacy and the main accounts of “computational implementation”.
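
    The phenomenon itself can be illustrated with a standard toy case, sketched below (the gate, voltage labels, and mappings are assumptions for illustration, not drawn from the paper): one and the same physical input-output profile can be taken to compute AND or OR, depending on how its voltage levels are mapped onto binary digits.

```python
# An illustrative toy case of computational indeterminacy (assumed here for
# exposition, not taken from the paper): the same physical behavior can be read
# as computing AND or OR, depending on the voltage-to-bit labeling.

# Hypothetical physical behavior: output is "high" exactly when both inputs are "high".
PHYSICAL_GATE = {
    ("low", "low"): "low",
    ("low", "high"): "low",
    ("high", "low"): "low",
    ("high", "high"): "high",
}

def computed_function(labeling):
    """Read the physical gate through a voltage-to-bit labeling, yielding a Boolean truth table."""
    return {
        (labeling[a], labeling[b]): labeling[out]
        for (a, b), out in PHYSICAL_GATE.items()
    }

print(computed_function({"low": 0, "high": 1}))  # the truth table of AND
print(computed_function({"low": 1, "high": 0}))  # the truth table of OR
```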

    Zuse's thesis, Gandy's thesis, and Penrose's thesis

    Marr’s Computational Level and Delineating Phenomena

    A key component of scientific inquiry, especially inquiry devoted to developing mechanistic explanations, is delineating the phenomenon to be explained. The task of delineating phenomena, however, has not been sufficiently analyzed, even by the new mechanistic philosophers of science. We contend that Marr’s characterization of what he called the computational level (CL) provides a valuable resource for understanding what is involved in delineating phenomena. Unfortunately, the distinctive feature of Marr’s computational level, his dual emphasis on both what is computed and why it is computed, has not been appreciated in philosophical discussions of Marr. Accordingly, we offer a distinctive account of CL. This then allows us to develop two important points about delineating phenomena. First, the accounts of phenomena that figure in explanatory practice are typically not qualitative but precise, formal, or mathematical representations. Second, delineating phenomena requires consideration of the demands the environment places on the mechanism: identifying, as Marr put it, the basis of the computed function in the world. As valuable as Marr’s account of CL is in characterizing phenomena, we contend that ultimately he did not go far enough. Determining the relevant demands of the environment on the mechanism often requires detailed empirical investigation. Moreover, phenomena are often reconstituted in the course of inquiry on the mechanism itself.
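
    As a rough illustration of the dual emphasis highlighted above, the sketch below casts Marr's well-known cash-register example (Marr's own illustration, not one developed in this paper) in code: the "what" is the function computed, and the "why" is a set of constraints the environment places on any adequate combination rule. The function names and the particular constraints checked are assumptions for illustration.

```python
# A rough sketch of Marr's "what/why" pairing at the computational level, using
# his cash-register illustration. The names and the specific constraints are
# assumptions made for this example.

from itertools import product

# What is computed: combining two prices is addition.
def combine(price_a: float, price_b: float) -> float:
    return price_a + price_b

# Why it is computed: constraints the environment places on any adequate
# combination rule (order of items should not matter; buying nothing is free).
def satisfies_environmental_constraints(f) -> bool:
    samples = [0.0, 1.0, 2.5, 7.0]
    commutative = all(f(a, b) == f(b, a) for a, b in product(samples, samples))
    zero_is_neutral = all(f(a, 0.0) == a for a in samples)
    return commutative and zero_is_neutral

print(satisfies_environmental_constraints(combine))  # True
```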

    Structural Representations and the Brain

    Review of Physical Computation: A Mechanistic Account
